Results 1 - 20 of 58
1.
Audiol Neurootol ; 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38498993

ABSTRACT

INTRODUCTION: Difficulty understanding speech in noise is the most common complaint of people with hearing impairment. Thus, there is a need for tests of speech-in-noise ability in clinical settings, which must be validated for each language. Here, a reference dataset is presented for a quick speech-in-noise test in the French language (Vocale Rapide dans le Bruit, VRB; Leclercq, Renard & Vincent, 2018). METHODS: A large cohort (N=641) was tested in a nationwide multicentric study. The cohort comprised normal-hearing individuals and individuals with a broad range of symmetrical hearing losses. Short everyday sentences embedded in babble noise were presented over a spatial array of loudspeakers. Speech level was kept constant while noise level was progressively increased over a range of signal-to-noise ratios. The signal-to-noise ratio for which 50% of keywords could be correctly reported (Speech Reception Threshold, SRT) was derived from psychometric functions. Other audiometric measures were collected for the cohort, such as audiograms and speech-in-quiet performance. RESULTS: The VRB test was both sensitive and reliable, as shown by the steep slope of the psychometric functions and by the high test-retest consistency across sentence lists. Correlation analyses showed that pure-tone averages derived from the audiograms explained 74% of the SRT variance over the whole cohort, but only 29% for individuals with clinically normal audiograms. SRTs were then compared to recent guidelines from the French Society of Audiology (Joly et al., 2021). Among individuals who would not have qualified for hearing aid prescription based on their audiogram or speech intelligibility in quiet, 18.4% were now eligible as they displayed SRTs in noise impaired by 3 dB or more. For individuals with borderline audiograms, between 20 dB HL and 30 dB HL, the prevalence of impaired SRTs increased to 71.4%. 
Finally, even though five lists are recommended for clinical use, a minute-long screening using only one VRB list detected 98.6% of impaired SRTs. CONCLUSION: The reference data suggest that VRB testing can be used to identify individuals with speech-in-noise impairment.
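The SRT extraction described above can be sketched numerically: fit a psychometric function to proportion-correct scores across signal-to-noise ratios and read off the 50% point. This is a generic illustration, not the VRB analysis pipeline; the logistic form, the data values, and the fitting choices are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt, slope):
    """Logistic psychometric function: proportion of keywords correct vs. SNR."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt)))

# Hypothetical data: proportion of keywords correct at each SNR (dB)
snr = np.array([-12, -9, -6, -3, 0, 3])
p_correct = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])

# Fit; the srt parameter is the SNR at which 50% of keywords are reported
(srt, slope), _ = curve_fit(psychometric, snr, p_correct, p0=[-4.0, 1.0])
print(f"SRT = {srt:.1f} dB SNR, slope = {slope:.2f}")
```

The steepness of the fitted slope is what makes such a test sensitive: a steep function means small SNR changes move performance quickly through the 50% point.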

2.
PLoS Comput Biol ; 20(2): e1011849, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38315733

ABSTRACT

Sleep deprivation has an ever-increasing impact on individuals and societies. Yet, to date, there is no quick and objective test for sleep deprivation. Here, we used automated acoustic analyses of the voice to detect sleep deprivation. Building on current machine-learning approaches, we focused on interpretability by introducing two novel ideas: the use of a fully generic auditory representation as input feature space, combined with an interpretation technique based on reverse correlation. The auditory representation consisted of a spectro-temporal modulation analysis derived from neurophysiology. The interpretation method aimed to reveal the regions of the auditory representation that supported the classifiers' decisions. Results showed that generic auditory features could be used to detect sleep deprivation successfully, with an accuracy comparable to state-of-the-art speech features. Furthermore, the interpretation revealed two distinct effects of sleep deprivation on the voice: changes in slow temporal modulations related to prosody and changes in spectral features related to voice quality. Importantly, the relative balance of the two effects varied widely across individuals, even though the amount of sleep deprivation was controlled, thus confirming the need to characterize sleep deprivation at the individual level. Moreover, while the prosody factor correlated with subjective sleepiness reports, the voice quality factor did not, consistent with the presence of both explicit and implicit consequences of sleep deprivation. Overall, the findings show that individual effects of sleep deprivation may be observed in vocal biomarkers. Future investigations correlating such markers with objective physiological measures of sleep deprivation could enable "sleep stethoscopes" for the cost-effective diagnosis of the individual effects of sleep deprivation.
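As a rough illustration of the auditory representation described above, spectro-temporal modulation content can be approximated by the 2-D Fourier magnitude of a spectrogram: one axis is temporal modulation rate, the other spectral modulation scale. This is a simplified sketch, not the neurophysiology-derived model used in the study; the toy spectrogram and axis conventions are assumptions.

```python
import numpy as np

def modulation_spectrum(spectrogram):
    """Spectro-temporal modulation content as the 2-D Fourier magnitude of a
    mean-removed spectrogram (frequency x time)."""
    centered = spectrogram - spectrogram.mean()
    return np.abs(np.fft.fftshift(np.fft.fft2(centered)))

# Toy spectrogram (32 frequency channels x 128 time frames) with a slow
# 4-cycle temporal ripple, mimicking a prosody-rate modulation
time = np.linspace(0, 1, 128, endpoint=False)
spec = np.tile(np.sin(2 * np.pi * 4 * time), (32, 1))
mod = modulation_spectrum(spec)
```

In such a representation, slow temporal modulations (prosody-like) appear near the center of the temporal-modulation axis, while spectral structure (voice-quality-like) spreads along the other axis.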


Subjects
Sleep Deprivation, Voice, Humans, Sleep, Voice Quality, Wakefulness
3.
JASA Express Lett ; 3(6)2023 06 01.
Article in English | MEDLINE | ID: mdl-37379207

ABSTRACT

Online auditory experiments use the sound delivery equipment of each participant, with no practical way to calibrate sound level or frequency response. Here, a method is proposed to control sensation level across frequencies: embedding stimuli in threshold-equalizing noise. In a cohort of 100 online participants, noise could equate detection thresholds from 125 to 4000 Hz. Equalization was successful even for participants with atypical thresholds in quiet, due either to poor quality equipment or unreported hearing loss. Moreover, audibility in quiet was highly variable, as overall level was uncalibrated, but variability was much reduced with noise. Use cases are discussed.
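A minimal sketch of the spectral-shaping step behind such a masker: shape Gaussian noise with an interpolated dB gain profile across frequency. An actual threshold-equalizing noise would derive its profile from measured detection thresholds so that tones are equally detectable at all frequencies; the profile values, sampling rate, and function name here are hypothetical.

```python
import numpy as np

def shaped_noise(freqs_hz, gains_db, fs=16000, dur=1.0, seed=0):
    """Gaussian noise spectrally shaped by an interpolated dB gain profile --
    a simplified stand-in for a threshold-equalizing masker."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    # Interpolate the gain profile onto the FFT bins and apply it
    gain = 10.0 ** (np.interp(f, freqs_hz, gains_db) / 20.0)
    return np.fft.irfft(spectrum * gain, n)

# Hypothetical profile boosting low frequencies, as a masker might require
masker = shaped_noise([125, 500, 2000, 4000], [6.0, 3.0, 0.0, -3.0])
```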


Subjects
Deafness, Speech Perception, Humans, Auditory Threshold/physiology, Speech Perception/physiology, Noise/adverse effects
4.
J Exp Psychol Hum Percept Perform ; 49(7): 949-967, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37199950

ABSTRACT

Can we become aware of auditory stimuli retrospectively, even if they initially failed to reach awareness? Here, we tested whether spatial cueing of attention after a word had been played could trigger retrospective conscious access. Two sound streams were presented dichotically. One stream was attended for a primary task of speeded semantic categorization. The other stream included occasional target words, which had to be identified as a secondary task after the trial. We observed that cueing attention to the secondary stream improved identification accuracy, even when cueing occurred more than 500 ms after the target offset. In addition, such "retro-cueing" boosted the detection sensitivity and subjective audibility of the target. The effect was a perceptual one and not one based on enhancing or protecting conscious representations already available in working memory, as shown by quantitative models of the experimental data. In particular, the retro-cue did not gradually shift audibility but rather sharply changed the balance between fully audible and not audible trials. Together with remarkably similar results in vision, these results point to a previously unsuspected temporal flexibility of conscious access as a core feature of perception, across modalities. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Cues, Short-Term Memory, Humans, Retrospective Studies, Consciousness, Semantics
5.
Curr Biol ; 33(8): R296-R298, 2023 04 24.
Article in English | MEDLINE | ID: mdl-37098329

ABSTRACT

Almost universally, music uses scales consisting of a small number of notes. Could this increase the fitness of melodies for oral transmission? By reproducing the process online, a new study reveals how cognition, sound and culture may interact to shape music.


Subjects
Music, Cognition, Sound, Auditory Perception
6.
PLoS Comput Biol ; 19(1): e1010307, 2023 01.
Article in English | MEDLINE | ID: mdl-36634121

ABSTRACT

Changes in the frequency content of sounds over time are arguably the most basic form of information about the behavior of sound-emitting objects. In perceptual studies, such changes have mostly been investigated separately, as aspects of either pitch or timbre. Here, we propose a unitary account of "up" and "down" subjective judgments of frequency change, based on a model combining auditory correlates of acoustic cues in a sound-specific and listener-specific manner. To do so, we introduce a generalized version of so-called Shepard tones, allowing symmetric manipulations of spectral information on a fine scale, usually associated with pitch (spectral fine structure, SFS), and on a coarse scale, usually associated with timbre (spectral envelope, SE). In a series of behavioral experiments, listeners reported "up" or "down" shifts across pairs of generalized Shepard tones that differed in SFS, in SE, or in both. We observed the classic properties of Shepard tones for either SFS or SE shifts: subjective judgments followed the smallest log-frequency change direction, with cases of ambiguity and circularity. Interestingly, when both SFS and SE changes were applied concurrently (synergistically or antagonistically), we observed a trade-off between cues. Listeners were encouraged to report when they perceived "both" directions of change concurrently, but this rarely happened, suggesting a unitary percept. A computational model could accurately fit the behavioral data by combining different cues reflecting frequency changes after auditory filtering. The model revealed that cue weighting depended on the nature of the sound. When presented with harmonic sounds, listeners put more weight on SFS-related cues, whereas inharmonic sounds led to more weight on SE-related cues. Moreover, these stimulus-based factors were modulated by inter-individual differences, revealing variability across listeners in the detailed recipe for "up" and "down" judgments. 
We argue that frequency changes are tracked perceptually via the adaptive combination of a diverse set of cues, in a manner that is in fact similar to the derivation of other basic auditory dimensions such as spatial location.
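A minimal sketch of a generalized Shepard tone, assuming the standard construction of octave-spaced partials under a log-frequency Gaussian envelope: shifting `base_hz` changes the spectral fine structure (SFS), while shifting `env_center_hz` changes the spectral envelope (SE). Parameter values are illustrative, not those used in the study.

```python
import numpy as np

def shepard_tone(base_hz=440.0, env_center_hz=960.0, env_octaves=1.0,
                 fs=44100, dur=0.5):
    """Octave-spaced partials (SFS) under a log-frequency Gaussian
    spectral envelope (SE)."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.zeros_like(t)
    # Cover the audible range with octave transpositions of base_hz
    for k in range(-5, 6):
        f = base_hz * 2.0 ** k
        if not 20 <= f <= fs / 2:
            continue
        amp = np.exp(-0.5 * (np.log2(f / env_center_hz) / env_octaves) ** 2)
        tone += amp * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))

# One-semitone SFS shift; an SE shift would move env_center_hz instead
tone_up = shepard_tone(base_hz=440 * 2 ** (1 / 12))
```

Applying the two shifts in opposite directions produces the antagonistic cue conflict described in the abstract.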


Subjects
Acoustics, Auditory Perception, Acoustic Stimulation, Cues, Judgment, Pitch Perception
7.
Proc Natl Acad Sci U S A ; 118(48)2021 11 30.
Article in English | MEDLINE | ID: mdl-34819369

ABSTRACT

To guide behavior, perceptual systems must operate on intrinsically ambiguous sensory input. Observers are usually able to acknowledge the uncertainty of their perception, but in some cases, they critically fail to do so. Here, we show that a physiological correlate of ambiguity can be found in pupil dilation even when the observer is not aware of such ambiguity. We used a well-known auditory ambiguous stimulus, known as the tritone paradox, which can induce the perception of an upward or downward pitch shift within the same individual. In two experiments, behavioral responses showed that listeners could not explicitly access the ambiguity in this stimulus, even though their responses varied from trial to trial. However, pupil dilation was larger for the more ambiguous cases. The ambiguity of the stimulus for each listener was indexed by the entropy of behavioral responses, and this entropy was also a significant predictor of pupil size. In particular, entropy explained additional variation in pupil size independent of the explicit judgment of confidence in the specific situation that we investigated, in which the two measures were decoupled. Our data thus suggest that stimulus ambiguity is implicitly represented in the brain even without explicit awareness of this ambiguity.
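The entropy index mentioned above is simply the Shannon entropy of the binary up/down response distribution per stimulus: 0 bits when responses are consistent, 1 bit when "up" and "down" are equally likely. A sketch (the exact estimator used in the study may differ):

```python
import numpy as np

def response_entropy(p_up):
    """Shannon entropy (bits) of a binary up/down response probability."""
    p = np.clip(p_up, 1e-12, 1 - 1e-12)  # guard against log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Hypothetical per-stimulus proportions of "up" responses
h_ambiguous = response_entropy(0.5)    # maximally ambiguous stimulus
h_consistent = response_entropy(0.95)  # nearly consistent stimulus
```

Regressing pupil size on this per-stimulus entropy is then a standard linear-model step.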


Subjects
Auditory Perception/physiology, Awareness/physiology, Pupil/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Judgment, Male, Uncertainty, Visual Perception/physiology
8.
J Acoust Soc Am ; 150(3): 1735, 2021 09.
Article in English | MEDLINE | ID: mdl-34598638

ABSTRACT

Stochastic sounds are useful to probe auditory memory, as they require listeners to learn unpredictable and novel patterns under controlled experimental conditions. Previous studies using white noise or random click trains have demonstrated rapid auditory learning. Here, we explored perceptual learning with a more parametrically variable stimulus. These "tone clouds" were defined as broadband combinations of tone pips at randomized frequencies and onset times. Varying the number of tones covered a perceptual range from individually audible pips to noise-like stimuli. Results showed that listeners could detect and learn repeating patterns in tone clouds. Task difficulty varied depending on the density of tone pips, with sparse tone clouds the easiest. Rapid learning of individual tone clouds was observed for all densities, with a roughly constant benefit of learning irrespective of baseline performance. Variations in task difficulty were correlated to amplitude modulations in an auditory model. Tone clouds thus provide a tool to probe auditory learning in a variety of task-difficulty settings, which could be useful for clinical or neurophysiological studies. They also show that rapid auditory learning operates over a wide range of spectrotemporal complexity, essentially from melodies to noise.
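A sketch of how such a tone cloud could be synthesized: tone pips at random log-spaced frequencies and random onsets, with pip density as the difficulty parameter. Pip duration, frequency range, and ramp shape are assumptions, not the study's exact values.

```python
import numpy as np

def tone_cloud(n_pips=30, dur=1.0, pip_dur=0.03, fmin=125, fmax=4000,
               fs=16000, seed=0):
    """Broadband combination of tone pips at randomized frequencies and
    onset times; n_pips moves the percept from sparse pips to noise-like."""
    rng = np.random.default_rng(seed)
    cloud = np.zeros(int(fs * dur))
    n_pip = int(fs * pip_dur)
    ramp = np.hanning(n_pip)  # smooth pip on/offsets
    t = np.arange(n_pip) / fs
    for _ in range(n_pips):
        f = np.exp(rng.uniform(np.log(fmin), np.log(fmax)))  # log-uniform
        onset = rng.integers(0, len(cloud) - n_pip)
        cloud[onset:onset + n_pip] += ramp * np.sin(2 * np.pi * f * t)
    return cloud

sparse, dense = tone_cloud(n_pips=10), tone_cloud(n_pips=100)
```

A repeating pattern is obtained by reusing the same seed for the "repeated" trials and fresh seeds for the others.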


Subjects
Learning, Noise, Acoustic Stimulation, Auditory Perception, Noise/adverse effects, Sound
9.
J Acoust Soc Am ; 150(3): 1934, 2021 09.
Article in English | MEDLINE | ID: mdl-34598651

ABSTRACT

Learning about new sounds is essential for cochlear-implant and normal-hearing listeners alike, with the additional challenge for implant listeners that spectral resolution is severely degraded. Here, a task measuring the rapid learning of slow or fast stochastic temporal sequences [Kang, Agus, and Pressnitzer (2017). J. Acoust. Soc. Am. 142, 2219-2232] was performed by cochlear-implant (N = 10) and normal-hearing (N = 9) listeners, using electric or acoustic pulse sequences, respectively. Rapid perceptual learning was observed for both groups, with highly similar characteristics. Moreover, for cochlear-implant listeners, an additional condition tested ultra-fast electric pulse sequences that would be impossible to represent temporally when presented acoustically. This condition also demonstrated learning. Overall, the results suggest that cochlear-implant listeners have access to the neural plasticity mechanisms needed for the rapid perceptual learning of complex temporal sequences.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Acoustic Stimulation, Acoustics, Hearing Tests
10.
J Neurosci Methods ; 362: 109297, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34320410

ABSTRACT

BACKGROUND: Many scientific fields now use machine-learning tools to assist with complex classification tasks. In neuroscience, automatic classifiers may be useful to diagnose medical images, monitor electrophysiological signals, or decode perceptual and cognitive states from neural signals. However, such tools often remain black boxes: they lack interpretability. A lack of interpretability has obvious ethical implications for clinical applications, but it also limits the usefulness of these tools to formulate new theoretical hypotheses. NEW METHOD: We propose a simple and versatile method to help characterize the information used by a classifier to perform its task. Specifically, noisy versions of training samples or, when the training set is unavailable, custom-generated noisy samples, are fed to the classifier. Multiplicative noise, so-called "bubbles", or additive noise are applied to the input representation. Reverse correlation techniques are then adapted to extract either the discriminative information, defined as the parts of the input dataset that have the most weight in the classification decision, or the represented information, which corresponds to the input features most representative of each category. RESULTS: The method is illustrated for the classification of written numbers by a convolutional deep neural network; for the classification of speech versus music by a support vector machine; and for the classification of sleep stages from neurophysiological recordings by a random forest classifier. In all cases, the features extracted are readily interpretable. COMPARISON WITH EXISTING METHODS: Quantitative comparisons show that the present method can match state-of-the-art interpretation methods for convolutional neural networks. Moreover, our method uses an intuitive and well-established framework in neuroscience, reverse correlation. It is also generic: it can be applied to any kind of classifier and any kind of input data. 
CONCLUSIONS: We suggest that the method could provide an intuitive and versatile interface between neuroscientists and machine-learning tools.
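The "bubbles" variant of such a method can be sketched as follows: probe a black-box classifier with multiplicative masks and reverse-correlate the masks against its decisions, so that features driving the decision stand out. The toy classifier, mask density, and aggregation rule are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def bubbles_reverse_correlation(classify, x, n_probes=500, p_keep=0.3, seed=0):
    """Average the multiplicative masks that preserved the original label,
    minus those that did not: a map of discriminative input features."""
    rng = np.random.default_rng(seed)
    target = classify(x)
    kept, lost = np.zeros_like(x, float), np.zeros_like(x, float)
    n_kept = 0
    for _ in range(n_probes):
        mask = (rng.random(x.shape) < p_keep).astype(float)
        if classify(x * mask) == target:
            kept += mask
            n_kept += 1
        else:
            lost += mask
    return kept / max(n_kept, 1) - lost / max(n_probes - n_kept, 1)

# Toy black-box classifier that only looks at feature 0
clf = lambda v: int(v[0] > 0.5)
dmap = bubbles_reverse_correlation(clf, np.ones(8))
```

Here the map peaks at feature 0, correctly identifying the only feature the toy classifier uses; no access to the classifier's internals was needed.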


Subjects
Machine Learning, Neural Networks (Computer), Support Vector Machine
11.
Nat Commun ; 12(1): 1149, 2021 02 19.
Article in English | MEDLINE | ID: mdl-33608533

ABSTRACT

An outstanding challenge for consciousness research is to characterize the neural signature of conscious access independently of any decisional processes. Here we present a model-based approach that uses inter-trial variability to identify the brain dynamics associated with stimulus processing. We demonstrate that, even in the absence of any task or behavior, the electroencephalographic response to auditory stimuli shows bifurcation dynamics around 250-300 milliseconds post-stimulus. Namely, the same stimulus gives rise to late sustained activity on some trials, and not on others. This late neural activity is predictive of task-related reports, and also of reports of conscious contents that are randomly sampled during task-free listening. Source localization further suggests that task-free conscious access recruits the same neural networks as those associated with explicit report, except for frontal executive components. Studying brain dynamics through variability could thus play a key role for identifying the core signatures of conscious access, independent of report.


Subjects
Brain/physiology, Consciousness/physiology, Acoustic Stimulation, Adolescent, Adult, Auditory Perception/physiology, Behavior, Cognitive Neuroscience, Electroencephalography, Female, Humans, Male, Visual Perception/physiology, Young Adult
12.
Curr Biol ; 29(19): R927-R929, 2019 10 07.
Article in English | MEDLINE | ID: mdl-31593668

ABSTRACT

Do members of a remote Amazonian tribe and Boston-trained musicians share similarities in their mental representations of auditory pitch? According to an impressive new set of psychoacoustic evidence they do, a finding which highlights the universal importance of relative pitch patterns.


Subjects
Music, Singing, Auditory Perception, Boston, Pitch Perception
13.
Sci Rep ; 8(1): 14548, 2018 09 28.
Article in English | MEDLINE | ID: mdl-30267021

ABSTRACT

Perceptual organisation must select one interpretation from several alternatives to guide behaviour. Computational models suggest that this could be achieved through an interplay between inhibition and excitation across competing neural populations coding for each interpretation. Here, to test such models, we used magnetic resonance spectroscopy to measure non-invasively the concentrations of inhibitory γ-aminobutyric acid (GABA) and excitatory glutamate-glutamine (Glx) in several brain regions. Human participants first performed auditory and visual multistability tasks that produced spontaneous switching between percepts. We then observed that longer percept durations during behaviour were associated with higher GABA/Glx ratios in the sensory area coding for each modality. When participants were asked to voluntarily modulate their perception, a common factor across modalities emerged: the GABA/Glx ratio in the posterior parietal cortex tended to be positively correlated with the amount of effective volitional control. Our results provide direct evidence that the balance between neural inhibition and excitation within sensory regions resolves perceptual competition. This powerful computational principle appears to be leveraged by both audition and vision, implemented independently across modalities, but modulated by an integrated control process.


Subjects
Auditory Perception, Parietal Lobe/physiology, Visual Perception, Adult, Female, Glutamic Acid/analysis, Glutamic Acid/metabolism, Glutamine/analysis, Glutamine/metabolism, Humans, Male, Middle Aged, Neural Inhibition, gamma-Aminobutyric Acid/analysis, gamma-Aminobutyric Acid/metabolism
15.
J Acoust Soc Am ; 143(6): 3665, 2018 06.
Article in English | MEDLINE | ID: mdl-29960504

ABSTRACT

Using a same-different discrimination task, it has been shown that discrimination performance for sequences of complex tones varying just detectably in pitch is less dependent on sequence length (1, 2, or 4 elements) when the tones contain resolved harmonics than when they do not [Cousineau, Demany, and Pressnitzer (2009). J. Acoust. Soc. Am. 126, 3179-3187]. This effect had been attributed to the activation of automatic frequency-shift detectors (FSDs) by the shifts in resolved harmonics. The present study provides evidence against this hypothesis by showing that the sequence-processing advantage found for complex tones with resolved harmonics is not found for pure tones or other sounds supposed to activate FSDs (narrow bands of noise and wide-band noises eliciting pitch sensations due to interaural phase shifts). The present results also indicate that for pitch sequences, processing performance is largely unrelated to pitch salience per se: for a fixed level of discriminability between sequence elements, sequences of elements with salient pitches are not necessarily better processed than sequences of elements with less salient pitches. An ideal-observer model for the same-different binary-sequence discrimination task is also developed in the present study. The model allows the computation of d' for this task using numerical methods.

16.
Neuroscience ; 389: 118-132, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29577997

ABSTRACT

Perception deals with temporal sequences of events, like series of phonemes for audition, dynamic changes in pressure for touch textures, or moving objects for vision. Memory processes are thus needed to make sense of the temporal patterning of sensory information. Recently, we have shown that auditory temporal patterns could be learned rapidly and incidentally with repeated exposure [Kang et al., 2017]. Here, we tested whether rapid incidental learning of temporal patterns was specific to audition, or if it was a more general property of sensory systems. We used the same behavioral task in three modalities: audition, touch, and vision, for stimuli having identical temporal statistics. Participants were presented with sequences of acoustic pulses for audition, motion pulses to the fingertips for touch, or light pulses for vision. Pulses were randomly and irregularly spaced, with all inter-pulse intervals in the sub-second range and all constrained to be longer than the temporal acuity in any modality. This led to pulse sequences with an average inter-pulse interval of 166 ms, a minimum inter-pulse interval of 60 ms, and a total duration of 1.2 s. Results showed that, if a random temporal pattern re-occurred at random times during an experimental block, it was rapidly learned, whatever the sensory modality. Moreover, patterns first learned in the auditory modality displayed transfer of learning to either touch or vision. This suggests that sensory systems may be exquisitely tuned to incidentally learn re-occurring temporal patterns, with possible cross-talk between the senses.
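The pulse statistics quoted above (all intervals at least 60 ms, mean interval around 166 ms, 1.2 s total duration) can be reproduced with a shifted-exponential interval sketch; the study's exact generative process may differ.

```python
import numpy as np

def pulse_sequence(dur=1.2, min_ipi=0.060, mean_ipi=0.166, seed=0):
    """Random pulse times with every inter-pulse interval >= min_ipi and an
    expected interval of mean_ipi (shifted-exponential intervals)."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        # Each interval is the hard minimum plus an exponential excess
        t += min_ipi + rng.exponential(mean_ipi - min_ipi)
        if t >= dur:
            break
        times.append(t)
    return np.array(times)

seq = pulse_sequence()
```

The same pulse-time vector can then drive acoustic clicks, tactile motion pulses, or light flashes, keeping temporal statistics identical across modalities.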


Subjects
Auditory Perception, Memory, Touch Perception, Visual Perception, Adult, Female, Humans, Male, Time Factors, Transfer (Psychology), Young Adult
17.
J Acoust Soc Am ; 142(4): 2219, 2017 10.
Article in English | MEDLINE | ID: mdl-29092589

ABSTRACT

The acquisition of auditory memory for temporal patterns was investigated. The temporal patterns were random sequences of irregularly spaced clicks. Participants performed a task previously used to study auditory memory for noise [Agus, Thorpe, and Pressnitzer (2010). Neuron 66, 610-618]. The memory for temporal patterns displayed strong similarities with the memory for noise: temporal patterns were learnt rapidly, in an unsupervised manner, and could be distinguished from statistically matched patterns after learning. There was, however, a qualitative difference from the memory for noise. For temporal patterns, no memory transfer was observed after time reversals, showing that both the time intervals and their order were represented in memory. Remarkably, learning was observed over a broad range of time scales, which encompassed rhythm-like and buzz-like temporal patterns. Temporal patterns present specific challenges to the neural mechanisms of plasticity, because the information to be learnt is distributed over time. Nevertheless, the present data show that the acquisition of novel auditory memories can be as efficient for temporal patterns as for sounds containing additional spectral and spectro-temporal cues, such as noise. This suggests that the rapid formation of memory traces may be a general by-product of repeated auditory exposure.


Subjects
Auditory Perception, Memory, Time Perception, Acoustic Stimulation, Adult, Cues, Female, Humans, Learning, Male, Transfer (Psychology)
18.
Sci Rep ; 7(1): 11526, 2017 09 14.
Article in English | MEDLINE | ID: mdl-28912437

ABSTRACT

In human listeners, the temporal voice areas (TVAs) are regions of the superior temporal gyrus and sulcus that respond more to vocal sounds than a range of nonvocal control sounds, including scrambled voices, environmental noises, and animal cries. One interpretation of the TVA's selectivity is based on low-level acoustic cues: compared to control sounds, vocal sounds may have stronger harmonic content or greater spectrotemporal complexity. Here, we show that the right TVA remains selective to the human voice even when accounting for a variety of acoustical cues. Using fMRI, single vowel stimuli were contrasted with single notes of musical instruments with balanced harmonic-to-noise ratios and pitches. We also used "auditory chimeras", which preserved subsets of acoustical features of the vocal sounds. The right TVA was preferentially activated only for the natural human voice. In particular, the TVA did not respond more to artificial chimeras preserving the exact spectral profile of voices. Additional acoustic measures, including temporal modulations and spectral complexity, could not account for the increased activation. These observations rule out simple acoustical cues as a basis for voice selectivity in the TVAs.


Subjects
Auditory Perception, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
19.
Nat Commun ; 8(1): 179, 2017 08 08.
Article in English | MEDLINE | ID: mdl-28790302

ABSTRACT

Sleep and memory are deeply related, but the nature of the neuroplastic processes induced by sleep remains unclear. Here, we report that memory traces can be either formed or suppressed during sleep, depending on sleep phase. We played samples of acoustic noise to sleeping human listeners. Repeated exposure to a novel noise during Rapid Eye Movement (REM) or light non-REM (NREM) sleep leads to improvements in behavioral performance upon awakening. Strikingly, the same exposure during deep NREM sleep leads to impaired performance upon awakening. Electroencephalographic markers of learning extracted during sleep confirm a dissociation between sleep facilitating memory formation (light NREM and REM sleep) and sleep suppressing learning (deep NREM sleep). We can trace these neural changes back to transient sleep events, such as spindles for memory facilitation and slow waves for suppression. Thus, highly selective memory processes are active during human sleep, with intertwined episodes of facilitative and suppressive plasticity.

Though memory and sleep are related, it is still unclear whether new memories can be formed during sleep. Here, authors show that people could learn new sounds during REM or light non-REM sleep, but that learning was suppressed when sounds were played during deep NREM sleep.


Subjects
Learning/physiology, Memory/physiology, REM Sleep/physiology, Sound, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Memory Consolidation, Sleep/physiology, Young Adult
20.
Nat Commun ; 8: 15027, 2017 04 20.
Article in English | MEDLINE | ID: mdl-28425433

ABSTRACT

A perceptual phenomenon is reported, whereby prior acoustic context has a large, rapid and long-lasting effect on a basic auditory judgement. Pairs of tones were devised to include ambiguous transitions between frequency components, such that listeners were equally likely to report an upward or downward 'pitch' shift between tones. We show that presenting context tones before the ambiguous pair almost fully determines the perceived direction of shift. The context effect generalizes to a wide range of temporal and spectral scales, encompassing the characteristics of most realistic auditory scenes. Magnetoencephalographic recordings show that a relative reduction in neural responsivity is correlated to the behavioural effect. Finally, a computational model reproduces behavioural results, by implementing a simple constraint of continuity for binding successive sounds in a probabilistic manner. Contextual processing, mediated by ubiquitous neural mechanisms such as adaptation, may be crucial to track complex sound sources over time.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Hearing/physiology, Sound, Acoustic Stimulation, Algorithms, Humans, Judgment, Magnetoencephalography, Theoretical Models, Pitch Perception/physiology, Young Adult